Stable Diffusion is a recent open-source image generation model comparable to proprietary models such as DALLE, Imagen, or Parti. Stable Diffusion comes with a safety filter that aims to prevent generating explicit images. Unfortunately, the filter is obfuscated and poorly documented. This makes it hard for users to prevent misuse in their applications, and to understand the filter's limitations and improve it. We first show that it is easy to generate disturbing content that bypasses the safety filter. We then reverse-engineer the filter and find that while it aims to prevent sexual content, it ignores violence, gore, and other similarly disturbing content. Based on our analysis, we argue safety measures in future model releases should strive to be fully open and properly documented to stimulate security contributions from the community.
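To make the filtering mechanism concrete, the sketch below shows a generic embedding-similarity filter of the kind discussed above: an output image's embedding is compared against a fixed set of blocked-concept embeddings with per-concept thresholds. The concept vectors, thresholds, and embedding source are hypothetical placeholders, not the released Stable Diffusion filter.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_blocked(image_embedding: np.ndarray,
               concept_embeddings: list,
               thresholds: list) -> bool:
    """Block the image if it is too similar to any pre-computed
    'unsafe concept' embedding. This mirrors the embedding-similarity
    design discussed in the text, not the exact released filter."""
    return any(
        cosine_similarity(image_embedding, concept) > tau
        for concept, tau in zip(concept_embeddings, thresholds)
    )

# Hypothetical usage: embeddings would come from an image encoder (e.g. CLIP).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
concepts = [rng.normal(size=512) for _ in range(3)]   # placeholder concept vectors
taus = [0.2, 0.2, 0.25]                               # placeholder thresholds
print("blocked:", is_blocked(image_emb, concepts, taus))
```

Because such a filter reacts only to concepts on its list, anything outside that list (for example violence or gore) passes unchecked, which is exactly the gap the analysis above identifies.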
Inverse reinforcement learning (IRL) is a powerful paradigm for inferring reward functions from expert demonstrations. Many IRL algorithms require a known transition model, sometimes even a known expert policy, or at least access to a generative model. However, these assumptions are too strong for many real-world applications, in which the environment can only be accessed through sequential interaction. We propose a novel IRL algorithm, Active exploration for Inverse Reinforcement Learning (AceIRL), which actively explores the unknown environment and expert policy in order to quickly learn the expert's reward function and identify a good policy. AceIRL uses previous observations to construct confidence intervals that capture plausible reward functions, and it finds exploration policies that focus on the most informative regions of the environment. AceIRL is the first approach to active IRL with sample-complexity bounds that does not require a generative model of the environment; in the worst case, it matches the sample complexity of active IRL with a generative model. In addition, we establish a problem-dependent bound that relates AceIRL's sample complexity to the suboptimality gap of a given IRL problem. We evaluate AceIRL empirically in simulations and find that it significantly outperforms more naive exploration strategies.
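A minimal sketch of the confidence-interval-driven exploration idea, assuming a tabular setting with per-state-action visit counts; it illustrates the general principle of exploring where the reward estimate is most uncertain, not the AceIRL algorithm itself.

```python
import numpy as np

def exploration_bonus(counts: np.ndarray, delta: float = 0.1) -> np.ndarray:
    """Width of a Hoeffding-style confidence interval per state-action pair.
    Rarely visited pairs get a large bonus."""
    return np.sqrt(np.log(2.0 / delta) / (2.0 * np.maximum(counts, 1)))

def greedy_exploration_policy(counts: np.ndarray) -> np.ndarray:
    """Pick, in every state, the action whose reward estimate is most
    uncertain -- a crude stand-in for the 'explore the most informative
    regions' idea described in the abstract."""
    return np.argmax(exploration_bonus(counts), axis=1)

# Toy example: 4 states, 2 actions, visit counts from earlier episodes.
counts = np.array([[10, 1],
                   [3, 3],
                   [0, 7],
                   [2, 2]])
print(greedy_exploration_policy(counts))   # action with the widest interval per state
```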
Reinforcement learning (RL) commonly assumes access to a well-specified reward function, which many practical applications cannot provide. Instead, a growing body of recent work explores learning what to do from interaction with humans. So far, most of these approaches model the human as (noisily) rational and, in particular, as giving unbiased feedback. We argue that these models are too simplistic and that RL researchers need to develop more realistic human models in order to design and evaluate their algorithms. In particular, we argue that human models must be personal, contextual, and dynamic. This paper calls for research from across disciplines to address key questions about how humans provide feedback to AI systems and how we can build more robust human-in-the-loop RL systems.
We study sequential decision-making with known rewards and unknown constraints, motivated by settings in which the constraints represent expensive-to-evaluate human preferences, such as safe and comfortable driving behavior. We formalize the challenge of interactively learning these constraints as a novel linear bandit problem, which we call constrained linear best-arm identification. To solve this problem, we propose the Adaptive Constraint Learning (ACOL) algorithm. We provide an instance-dependent lower bound for constrained linear best-arm identification and show that ACOL's sample complexity matches the lower bound in the worst case. In the average case, ACOL's sample-complexity bound is still considerably tighter than the bounds of simpler approaches. In synthetic experiments, ACOL is on par with an oracle solution and outperforms a range of baselines. As an application, we consider learning constraints that represent human preferences in a driving simulation, where ACOL is significantly more sample-efficient than the alternatives. Moreover, we find that learning preferences as constraints is more robust to changes in the driving scenario than encoding the preferences directly in the reward function.
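The toy script below sets up a constrained linear best-arm identification instance and runs a naive adaptive query loop as a stand-in for ACOL: it repeatedly queries the highest-reward arm whose feasibility is still ambiguous. All feature vectors, parameters, and noise levels are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rewards are known; the linear constraint value of each arm is unknown
# and can only be queried with noise.
arms = rng.normal(size=(8, 3))                    # arm feature vectors
theta_reward = np.array([1.0, 0.5, -0.2])
theta_constraint = np.array([0.3, -1.0, 0.8])     # hidden from the learner
threshold = 0.0                                   # arm is feasible if c(x) <= 0
rewards = arms @ theta_reward

def query_constraint(i: int, noise: float = 0.1) -> float:
    """Noisy oracle for the unknown constraint value of arm i."""
    return float(arms[i] @ theta_constraint + rng.normal(scale=noise))

# Naive adaptive loop: query the best-reward arm whose feasibility is unclear.
estimates = np.zeros(len(arms))
counts = np.zeros(len(arms))
for _ in range(200):
    widths = 1.0 / np.sqrt(np.maximum(counts, 1))
    ambiguous = np.abs(estimates - threshold) < widths
    candidates = np.where(ambiguous)[0]
    if len(candidates) == 0:
        break
    i = candidates[np.argmax(rewards[candidates])]
    counts[i] += 1
    estimates[i] += (query_constraint(i) - estimates[i]) / counts[i]

widths = 1.0 / np.sqrt(np.maximum(counts, 1))
feasible = estimates + widths <= threshold        # confidently feasible arms
best_arm = int(np.argmax(np.where(feasible, rewards, -np.inf)))
print("estimated best feasible arm:", best_arm)
```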
For many reinforcement learning (RL) applications, specifying a reward function is difficult. This paper considers an RL setting in which the agent obtains information about the reward only by querying an expert who can, for example, evaluate individual states or provide binary preferences over trajectories. From such expensive feedback, we aim to learn a model of the reward that allows standard RL algorithms to achieve high expected returns with as few expert queries as possible. To this end, we propose Information Directed Reward Learning (IDRL), which uses a Bayesian model of the reward and selects the queries that maximize the information gain about the difference in return between plausibly optimal policies. In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types. Moreover, it achieves similar or better performance by shifting the focus from reducing the reward approximation error to improving the policy induced by the reward model. We support our findings with extensive evaluations in multiple environments and with different query types.
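A heavily simplified sketch of the query-selection idea, assuming independent Gaussian beliefs over per-state rewards and state-evaluation queries only; the full IDRL method supports richer reward models and query types, so this is only an illustration of the selection criterion.

```python
import numpy as np

# Illustrative-only setup: a Gaussian belief over per-state rewards and a
# few fixed candidate policies described by their state-visitation vectors.
n_states = 6
mu = np.zeros(n_states)            # posterior mean of each state reward
var = np.ones(n_states)            # posterior variance (independent Gaussians)
noise_var = 0.5                    # variance of a noisy expert evaluation
visitations = np.array([           # rows: candidate policies
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 1, 0],
], dtype=float)

def pick_query() -> int:
    """Choose the state whose query most reduces the variance of the return
    difference between the two currently most promising policies."""
    returns = visitations @ mu
    top, second = np.argsort(returns)[-2:][::-1]
    d = visitations[top] - visitations[second]     # return-difference weights
    # Posterior variance of a state reward after one noisy observation.
    new_var = var * noise_var / (var + noise_var)
    reduction = d**2 * (var - new_var)
    return int(np.argmax(reduction))

print("next state to ask the expert about:", pick_query())
```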
In this paper, we propose a novel technique, INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, and it also captures program syntax via language semantics learned from a large code corpus with a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. It then determines that an APR-generated patch overfits if the patch (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. If our approach cannot determine whether a patch overfits based on invariants, INVALIDATOR uses a model trained on labeled patches to assess patch correctness based on program syntax. The benefits of INVALIDATOR are three-fold. First, INVALIDATOR leverages both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; it relies only on the current test suite and uses invariant inference to generalize program behaviors. Third, INVALIDATOR is fully automated. We conducted experiments on a dataset of 885 patches generated for real-world programs in Defects4J. The results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
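The overall decision logic can be summarized as the following sketch, in which invariants are reduced to plain string sets and the syntax-based classifier is an opaque callable; both are simplifications for illustration, not the tool's actual data structures.

```python
from typing import Callable, Set

def classify_patch(correct_specs: Set[str],
                   buggy_error_behaviors: Set[str],
                   patch_invariants: Set[str],
                   patch_error_behaviors: Set[str],
                   syntax_model: Callable[[str], bool],
                   patch_diff: str) -> str:
    """Two-stage decision mirroring the logic described in the abstract:
    semantic reasoning over likely invariants first, then a classifier
    trained on labeled patches as a syntactic fallback."""
    # (1) The patch violates specifications that hold on correct executions.
    if not correct_specs.issubset(patch_invariants):
        return "overfitting"
    # (2) The patch maintains error behaviors of the original buggy program.
    if buggy_error_behaviors & patch_error_behaviors:
        return "overfitting"
    # (3) Undecided by invariants: fall back to the syntax-based model.
    return "overfitting" if syntax_model(patch_diff) else "likely correct"

# Hypothetical toy usage with made-up invariants and a trivial stand-in model.
print(classify_patch(
    correct_specs={"x >= 0"},
    buggy_error_behaviors={"returns null on empty input"},
    patch_invariants={"x >= 0", "result != null"},
    patch_error_behaviors=set(),
    syntax_model=lambda diff: False,
    patch_diff="- return cache.get(key); + return cache.getOrDefault(key, 0);",
))
```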
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
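As background for the parameterization mentioned above, the sketch below builds a generic input convex neural network (non-negative weights on the hidden path and convex, non-decreasing activations) and takes the gradient of the resulting convex potential, which is the standard way a Brenier-type map is obtained from such a network; it is not the paper's exact architecture, objective, or training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Minimal input-convex neural network: the output is convex in the input
    because the hidden-path weights are kept non-negative (via softplus) and
    the activation is convex and non-decreasing.  Generic sketch only."""

    def __init__(self, dim: int, hidden: int = 64, depth: int = 2):
        super().__init__()
        self.input_layers = nn.ModuleList(
            [nn.Linear(dim, hidden) for _ in range(depth)]
        )
        self.z_weights = nn.ParameterList(
            [nn.Parameter(torch.randn(hidden, hidden) * 0.1) for _ in range(depth - 1)]
        )
        self.out_x = nn.Linear(dim, 1)
        self.out_z = nn.Parameter(torch.randn(1, hidden) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.softplus(self.input_layers[0](x))
        for lin_x, w_z in zip(self.input_layers[1:], self.z_weights):
            # softplus(w_z) >= 0 keeps the composition convex in x.
            z = F.softplus(lin_x(x) + z @ F.softplus(w_z).T)
        return self.out_x(x) + z @ F.softplus(self.out_z).T

# The gradient of a convex potential gives a Brenier-style monotone map.
icnn = ICNN(dim=3)
x = torch.randn(5, 3, requires_grad=True)
potential = icnn(x).sum()
transport = torch.autograd.grad(potential, x)[0]   # candidate latent mapping
print(transport.shape)                             # torch.Size([5, 3])
```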
Cashews are grown by over 3 million smallholders in more than 40 countries worldwide as a principal source of income. As the third largest cashew producer in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15% of the country's national export earnings. However, a lack of information on where and how cashew trees grow across the country hinders decision-making that could support increased cashew production and poverty alleviation. By leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep learning algorithms, and large-scale ground truth datasets, we successfully produced the first national map of cashew in Benin and characterized the expansion of cashew plantations between 2015 and 2021. In particular, we developed a SpatioTemporal Classification with Attention (STCA) model to map the distribution of cashew plantations, which can fully capture texture information from discriminative time steps during a growing season. We further developed a Clustering Augmented Self-supervised Temporal Classification (CASTC) model to distinguish high-density versus low-density cashew plantations by automatic feature extraction and optimized clustering. Results show that the STCA model has an overall accuracy of 80% and the CASTC model achieved an overall accuracy of 77.9%. We found that the cashew area in Benin has doubled from 2015 to 2021 with 60% of new plantation development coming from cropland or fallow land, while encroachment of cashew plantations into protected areas has increased by 70%. Only half of cashew plantations were high-density in 2021, suggesting high potential for intensification. Our study illustrates the power of combining high-resolution remote sensing imagery and state-of-the-art deep learning algorithms to better understand tree crops in the heterogeneous smallholder landscape.
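The core idea of attending to discriminative time steps can be illustrated with a toy temporal-attention pooling step over a per-pixel image time series; the actual STCA model is far richer, so treat this only as a sketch of the mechanism, with made-up feature dimensions and weights.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention_pool(series: np.ndarray, w_attn: np.ndarray) -> np.ndarray:
    """Weight each time step of a satellite image time series by a learned
    attention score, so discriminative parts of the growing season dominate
    the pooled feature.  Shapes: series (T, D), w_attn (D,)."""
    scores = softmax(series @ w_attn)          # (T,) attention over time steps
    return scores @ series                     # (D,) attention-pooled feature

# Toy example: 12 monthly observations with 8 spectral/texture features each.
rng = np.random.default_rng(0)
series = rng.normal(size=(12, 8))
w_attn = rng.normal(size=8)       # would be learned jointly with a classifier
pooled = temporal_attention_pool(series, w_attn)
print(pooled.shape)               # (8,)
```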
Coronary Computed Tomography Angiography (CCTA) provides information on the presence, extent, and severity of obstructive coronary artery disease. Large-scale clinical studies analyzing CCTA-derived metrics typically require ground-truth validation in the form of high-fidelity 3D intravascular imaging. However, manual rigid alignment of intravascular images to corresponding CCTA images is both time consuming and user-dependent. Moreover, intravascular modalities suffer from several non-rigid, motion-induced artifacts arising from distortions of the imaging catheter path. To address these issues, we here present a semi-automatic segmentation-based framework for both rigid and non-rigid matching of intravascular images to CCTA images. We formulate the problem in terms of finding the optimal \emph{virtual catheter path} that samples the CCTA data to recapitulate the coronary artery morphology found in the intravascular image. We validate our co-registration framework on a cohort of $n=40$ patients using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our results indicate that our non-rigid registration significantly outperforms other co-registration approaches for luminal bifurcation alignment in both the longitudinal (mean mismatch: 3.3 frames) and rotational directions (mean mismatch: 28.6 degrees). By providing a differentiable framework for automatic multi-modal intravascular data fusion, our developed co-registration modules significantly reduce the manual effort required to conduct large-scale multi-modal clinical studies while also providing a solid foundation for the development of machine learning-based co-registration approaches.
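As a toy illustration of the rigid stage of such a co-registration, the sketch below grid-searches a longitudinal frame offset and a rotation angle that align bifurcation landmarks between the two modalities; the landmark values and the cost weighting are made up, and the actual framework is differentiable and handles the non-rigid stage as well.

```python
import numpy as np

def rigid_registration(ivus_landmarks, ccta_landmarks, max_shift=20):
    """Grid-search the longitudinal frame offset and rotation angle that best
    align bifurcation landmarks (frame index, angle in degrees) between an
    intravascular pullback and the CCTA-derived centerline.  A deliberately
    simple stand-in for the rigid stage described in the abstract."""
    best = (None, None, np.inf)
    for shift in range(-max_shift, max_shift + 1):
        for rot in range(0, 360, 5):
            cost = 0.0
            for (f_i, a_i), (f_c, a_c) in zip(ivus_landmarks, ccta_landmarks):
                d_angle = (a_i + rot - a_c + 180) % 360 - 180   # wrap to [-180, 180)
                cost += (f_i + shift - f_c) ** 2 + (d_angle / 10.0) ** 2
            if cost < best[2]:
                best = (shift, rot, cost)
    return best[:2]

# Hypothetical landmarks: (frame index, angle) of three bifurcations.
ivus = [(10, 30.0), (55, 120.0), (90, 250.0)]
ccta = [(14, 60.0), (59, 150.0), (94, 280.0)]
print(rigid_registration(ivus, ccta))   # expected roughly (4, 30)
```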